Location-scale Meta-analysis and Meta-regression as a Tool to Capture Large-scale Changes in Biological and Methodological Heterogeneity: a Spotlight on Heteroscedasticity

Author

TBA

Published

January 27, 2025

1 Introduction (Outline)

Here, we illustrate how to fit location-scale meta-analysis and meta-regression models in R using brms (a Bayesian approach), demonstrating how to model both the mean (location) and variance (scale) components within a meta-analytic framework.

We cover:

1. A dataset with a categorical moderator (e.g., habitat type).
2. A dataset with a continuous moderator (e.g., log-elevation).
3. An example showing how to examine publication bias in both the location and scale parts (e.g., the small-study effect/divergence and the decline effect/Proteus effect).

For each example, we start with fitting the following location-scale model (or double-hierarchical model):

\[ y_i = \beta_0^{(l)} + u_{j[i]}^{(l)} + e_i^{(l)} + m_i^{(l)}, \] \[ \ln(\sigma_{e_i}) = \beta_0^{(s)} + u_{j[i]}^{(s)}. \] \[ \begin{equation} \begin{pmatrix} u_j^{(l)} \\ u_j^{(s)} \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix}0 \\ 0\end{pmatrix}, \begin{pmatrix} \sigma_{u(l)}^{2} & \rho_u \sigma_{u(l)} \sigma_{u(s)} \\ \rho_u \sigma_{u(l)} \sigma_{u(s)} & \sigma_{u(s)}^{2} \end{pmatrix} \right). \end{equation} \]

\[ e_i \sim \mathcal{N}(0,\sigma_{e_i}^2), \space \text{and} \space m_i \sim \mathcal{N}(0,\sigma_{m_i}^2). \]

TODO - description….

For Example 3, we also use:

\[ \begin{equation} \begin{pmatrix} u_j^{(l)} \\ u_j^{(s)} \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix}0 \\ 0\end{pmatrix}, \begin{pmatrix} \sigma_{u(l)}^{2} & 0 \\ 0 & \sigma_{u(s)}^{2} \end{pmatrix} \right). \end{equation} \]

Then, we fit the following location-scale meta-regression:

\[ y_i = \beta_0^{(l)} + \beta_1^{(l)} x_{1i} + \cdots + \beta_p^{(l)} x_{pi} + u_{j[i]}^{(l)} + e_i^{(l)} + m_i^{(l)}, \] \[ \ln(\sigma_{e_i}) = \beta_0^{(s)} + \beta_1^{(s)} x_{1i} + \cdots + \beta_p^{(s)} x_{pi}, \]

\[ u_j^{(l)} \sim \mathcal{N}(0,\sigma_{u(l)}^{2}), \space e_i \sim \mathcal{N}(0,\sigma_{e_i}^2), \space \text{and} \space m_i \sim \mathcal{N}(0,\sigma_{m_i}^2) \] TODO - description….
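To make the scale part concrete, here is a minimal base-R simulation sketch of the location-scale meta-regression above, for a single continuous moderator x (all parameter values are arbitrary illustrations, not estimates): x shifts the mean through the location coefficients and the log residual SD through the scale coefficients.

```r
set.seed(1)
n <- 2000
x <- runif(n)                    # a continuous moderator (arbitrary)
b0_l <- 0.2;  b1_l <- -0.1       # location part: beta_0^(l), beta_1^(l) (arbitrary)
b0_s <- -2.0; b1_s <- 1.5        # scale part, on log(SD): beta_0^(s), beta_1^(s) (arbitrary)
vi <- rep(0.01, n)               # known sampling variances

sigma_e <- exp(b0_s + b1_s * x)  # ln(sigma_e_i) = beta_0^(s) + beta_1^(s) * x_i
yi <- b0_l + b1_l * x +
  rnorm(n, 0, sigma_e) +         # e_i ~ N(0, sigma_e_i^2)
  rnorm(n, 0, sqrt(vi))          # m_i ~ N(0, v_i)

# heteroscedasticity: effect sizes at high x are more variable
var(yi[x < 0.5]) < var(yi[x >= 0.5])
```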

For Example 1, we also fit a phylogenetic location-scale meta-regression as follows:

\[ y_i = \beta_0^{(l)} + a_{k[i]}^{(l)} + s_{k[i]}^{(l)} + u_{j[i]}^{(l)} + e_i^{(l)} + m_i^{(l)} \]

\[ \ln(\sigma_{e_i}) = \beta_0^{(s)} + \beta_1^{(s)} x_{1i} + \cdots + \beta_p^{(s)} x_{pi}, \] \[ a_k^{(l)} \sim \mathcal{N}(0,\sigma_{a(l)}^{2}\mathbf{A}), \space \text{and} \space s_k^{(l)} \sim \mathcal{N}(0,\sigma_{s(l)}^{2}). \]

with

\[ u_j^{(l)} \sim \mathcal{N}(0,\sigma_{u(l)}^{2}), \space e_i \sim \mathcal{N}(0,\sigma_{e_i}^2), \space \text{and} \space m_i \sim \mathcal{N}(0,\sigma_{m_i}^2). \]
TODO - description….

Later, we use metafor (a frequentist implementation) and blsmeta (another Bayesian implementation) to check whether results from brms match those from metafor and blsmeta when the same model is fitted.

Please refer to the associated paper for theoretical background, formulas, and details on location-scale models in ecology and evolution.

2 Prerequisites

2.1 Loading packages

Code

# Attempt to load or install necessary packages
if(!require(pacman)) install.packages("pacman")
pacman::p_load(
  tidyverse,
  tidybayes,
  here,
  patchwork,
  orchaRd,  # see the note
  ggplot2,
  pander,
  brms,
  metafor,
  blsmeta # see the note
)
Note

not about functions to install…..

It’s important…

2.2 Adding custom functions

These functions are used to visualize the posterior distributions from brms models (fixed effects, random-effect SDs, and correlations).

Code
# Function to get variable names dynamically
get_variables_dynamic <- function(model, pattern) {
  variables <- get_variables(model)
  variables[grep(pattern, variables)]
}

rename_vars <- function(variable) {
  variable <- gsub("b_Intercept", "b_l_int", variable)
  variable <- gsub("b_sigma_Intercept", "b_s_int", variable)  
  variable <- gsub("b_habitatterrestrial", "b_l_contrast", variable)            
  variable <- gsub("b_sigma_habitatterrestrial", "b_s_contrast", variable)
  variable <- gsub("b_methodpersisitent", "b_l_contrast", variable)            
  variable <- gsub("b_sigma_methodpersisitent", "b_s_contrast", variable)
  variable <- gsub("b_elevation_log", "b_l_slope", variable)            
  variable <- gsub("b_sigma_elevation_log", "b_s_slope", variable)
  variable <- gsub("sd_study_ID__Intercept", "sd_l_study_ID", variable)            
  variable <- gsub("sd_study_ID__sigma_Intercept", "sd_s_study_ID", variable)
  variable <- gsub("cor_study_ID__Intercept__sigma_Intercept", "cor_study_ID", variable)  
  return(variable)
}

# Function to visualize fixed effects
visualize_fixed_effects <- function(model) {
  fixed_effect_vars <- get_variables_dynamic(model, "^b_")
  if (length(fixed_effect_vars) == 0) {
    message("No fixed effects found")
    return(NULL)
  }
  
  tryCatch({
    fixed_effects_samples <- model %>%
      spread_draws(!!!syms(fixed_effect_vars)) %>%
      pivot_longer(cols = all_of(fixed_effect_vars), names_to = ".variable", values_to = ".value") %>%
      mutate(.variable = rename_vars(.variable))
    
    ggplot(fixed_effects_samples, aes(x = .value, y = .variable)) +
      stat_halfeye(
        normalize = "xy", 
        point_interval = "mean_qi", 
        fill = "lightcyan3", 
        color = "lightcyan4"
      ) +
      geom_vline(xintercept = 0, linetype = "dashed", color = "#000055") +
      labs(y = "Fixed effects", x = "Posterior values") +
      theme_classic()
  }, error = function(e) {
    message("Error in visualize_fixed_effects: ", e$message)
    return(NULL)
  })
}

# Function to visualize random effects
visualize_random_effects <- function(model) {
  random_effect_vars <- get_variables_dynamic(model, "^sd_")
  random_effect_vars <- random_effect_vars[random_effect_vars != "sd_es_ID__Intercept"]
  if (length(random_effect_vars) == 0) {
    message("No random effects found")
    return(NULL)
  }
  
  tryCatch({
    random_effects_samples <- model %>%
      spread_draws(!!!syms(random_effect_vars)) %>%
      pivot_longer(cols = all_of(random_effect_vars), names_to = ".variable", values_to = ".value") %>%
      mutate(.variable = rename_vars(.variable))
    
    ggplot(random_effects_samples, aes(x = .value, y = .variable)) +
      stat_halfeye(
        normalize = "xy", 
        point_interval = "mean_qi", 
        fill = "olivedrab3", 
        color = "olivedrab4"
      ) +
      geom_vline(xintercept = 0, linetype = "dashed", color = "#000055") +
      labs(y = "Random effects (SD)", x = "Posterior values") +
      theme_classic()
  }, error = function(e) {
    message("Error in visualize_random_effects: ", e$message)
    return(NULL)
  })
}

# Function to visualize correlations
visualize_correlations <- function(model) {
  correlation_vars <- get_variables_dynamic(model, "^cor_")
  if (length(correlation_vars) == 0) {
    message("No correlations found")
    return(NULL)
  }
  
  tryCatch({
    correlation_samples <- model %>%
      spread_draws(!!!syms(correlation_vars)) %>%
      pivot_longer(cols = all_of(correlation_vars), names_to = ".variable", values_to = ".value") %>%
      mutate(.variable = rename_vars(.variable))
    
    ggplot(correlation_samples, aes(x = .value, y = .variable)) +
      stat_halfeye(
        normalize = "xy", 
        fill = "#FF6347", 
        color = "#8B3626"
      ) +
      geom_vline(xintercept = 0, linetype = "dashed", color = "#000055") +
      labs(y = "Correlations", x = "Posterior values") +
      theme_classic()
  }, error = function(e) {
    message("Error in visualize_correlations: ", e$message)
    return(NULL)
  })
}

3 Example 1: Categorical Moderator (Thermal Tolerance Dataset)

Data source (illustrative): Pottier, P. et al. (2022). Developmental plasticity in thermal tolerance: Ontogenetic variation, persistence, and future directions. Ecology Letters, 25(10), 2245–2268.

3.1 Loading the data

This dataset (thermal.csv) has an effect size dARR (the developmental acclimation response ratio: the ratio of the slopes of acclimation) and its sampling variance Var_dARR. Key moderators are two categorical variables: (1) habitat (aquatic vs. terrestrial: where the animals live) and (2) method (initial vs. persisitent: an experimental-design variable indicating whether the temperature increase was applied only during the initial phase or persisted over the entire experimental period; the spelling "persisitent" follows the level name used in the data).

Code
dat <- read.csv(here("data", "thermal.csv"))

# selecting variables we need
dat <- dat %>% select(dARR, Var_dARR, es_ID, population_ID, study_ID, exp_design, habitat)

# creating SE (sqrt(Var))
dat$si <- sqrt(dat$Var_dARR)

# creating a new variable (method) according to the original paper
dat$method <- ifelse(dat$exp_design %in% c("A", "B", "C"), "initial", "persisitent")

#head(dat)
str(dat)
## 'data.frame':    1089 obs. of  9 variables:
##  $ dARR         : num  0.0528 0.0428 0.054 0.0115 0.0503 ...
##  $ Var_dARR     : num  0.000171 0.000192 0.000239 0.000131 0.000175 ...
##  $ es_ID        : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ population_ID: int  1 1 2 2 2 3 3 3 3 3 ...
##  $ study_ID     : int  1 1 1 1 1 2 2 2 2 2 ...
##  $ exp_design   : chr  "D" "D" "D" "D" ...
##  $ habitat      : chr  "terrestrial" "terrestrial" "terrestrial" "terrestrial" ...
##  $ si           : num  0.0131 0.0139 0.0154 0.0114 0.0132 ...
##  $ method       : chr  "persisitent" "persisitent" "persisitent" "persisitent" ...
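A quick aside on the classification above: in R, comparing a vector against `c("A", "B", "C")` with `==` recycles the right-hand side and compares position by position, which can silently misclassify rows; `%in%` tests set membership, which is what is wanted here. A toy illustration (values are arbitrary stand-ins for exp_design codes):

```r
x <- c("B", "A", "C")   # toy values standing in for exp_design codes

x == c("A", "B", "C")   # FALSE FALSE TRUE: pairwise comparison with recycling
x %in% c("A", "B", "C") # TRUE TRUE TRUE: set membership
```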

3.2 Creating a variance-covariance matrix for sampling errors

3.2.1 Assuming independence among effect sizes

If each effect size has an independent sampling error, we can build a diagonal matrix:

Code
vcv <- diag(dat$Var_dARR)
rownames(vcv) <- dat$es_ID
colnames(vcv) <- dat$es_ID

3.2.2 Modeling non-independence

In reality, non-independence among sampling errors is common (effect sizes are often obtained partially or entirely from the same subjects or individuals):

Code
# here the correlation structure (cluster) is population_ID
# vcalc is from the metafor package
# we will not use this matrix for the models below; it is shown for demonstration

vcv2 <- vcalc(vi = Var_dARR, 
             cluster = population_ID, 
             obs = es_ID, rho = 0.5, 
             data = dat)
rownames(vcv2) <- dat$es_ID
colnames(vcv2) <- dat$es_ID
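For intuition, the matrix vcalc() builds can be sketched by hand for a single toy cluster of three effect sizes (illustrative numbers, not from the dataset): the diagonal holds the sampling variances and the within-cluster off-diagonals are rho * sqrt(vi * vj).

```r
vi  <- c(0.10, 0.20, 0.40)      # sampling variances within one cluster (arbitrary)
rho <- 0.5                      # assumed within-cluster correlation

V <- rho * sqrt(outer(vi, vi))  # off-diagonals: rho * sqrt(vi * vj)
diag(V) <- vi                   # diagonal: the sampling variances
V

cov2cor(V)                      # every within-cluster pair has correlation rho = 0.5
```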

3.3 Multilevel location-scale meta-analysis

Important

TODO - write the importance of modeling sampling error as the random effects and also fixing its variance as 1 and why

Code
# a first attempt - but this is not the model we want
form1 <- bf(dARR | se(si) ~ 1 + habitat, # se(si) treats si as the known sampling SD
            sigma ~ 1 + habitat
)

prior1 <- default_prior(form1, 
                        data = dat, 
                        family = gaussian()
)

ma1 <- brm(form1, 
           data = dat, 
           chains = 2, 
           cores = 2, 
           iter = 5000, 
           warmup = 2000,
           prior = prior1,
           control = list(adapt_delta = 0.99, max_treedepth = 20)
)

# this is also not right - se(si, sigma = TRUE) adds an estimated sigma^2
# on top of the known sampling variance
# (in meta-analysis, the sampling variance is assumed to be known)

form2 <- bf(dARR | se(si, sigma = TRUE) ~ 1 + habitat,
            sigma ~ 1 + habitat
)

prior2 <- default_prior(form2, 
                        data = dat, 
                        family = gaussian()
)

# this will run but this is a different model
ma2 <- brm(form2, 
           data = dat, 
           chains = 2, 
           cores = 2, 
           iter = 5000, 
           warmup = 2000,
           prior = prior2,
           control = list(adapt_delta = 0.99, max_treedepth = 20)
)
Code
#TODO - may need to rerun this model

# the shared ID "p" in (1|p|study_ID) allows the correlation between the location and scale random effects to be modeled
form_ma1 <- bf(dARR
            ~ 1   +
              (1|p|study_ID) + # this is u_l (the between-study effect)
              (1|gr(es_ID, cov = vcv)), # this is m (sampling error)
            sigma ~ 1 + 
              (1|p|study_ID) # this is u_s - between-study variation in the (log) residual (within-study) SD
)


# Generate default priors
prior_ma1 <- default_prior(form_ma1, 
                          data=dat, 
                          data2=list(vcv=vcv),
                          family=gaussian())
prior_ma1$prior[5] = "constant(1)" # fix the es_ID SD to 1: the sampling variance is assumed known, so its covariance is then exactly vcv
prior_ma1

# fitting model
fit_ma1 <- brm(
  formula = form_ma1,
  data = dat,
  data2 = list(vcv=vcv),
  chains = 2,
  cores = 2,
  iter = 6000,
  warmup = 3000,
  prior = prior_ma1,
  control = list(adapt_delta=0.95, max_treedepth=15)
)

# save this as rds
saveRDS(fit_ma1, here("Rdata", "fit_ma1.rds"))
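Why fix the es_ID SD to constant(1)? In brms, (1 | gr(es_ID, cov = vcv)) gives random effects with covariance sd^2 * vcv, so fixing sd = 1 leaves exactly the known sampling (co)variances. A base-R sketch of that logic with a toy matrix (not the real vcv):

```r
set.seed(42)
vcv_toy <- diag(c(0.02, 0.05, 0.10))  # toy "known" sampling variances
L <- t(chol(vcv_toy))                 # vcv_toy = L %*% t(L)

# draws m = sd * L %*% z have covariance sd^2 * vcv_toy; with sd = 1, just vcv_toy
sd_fix <- 1
z <- matrix(rnorm(3 * 1e5), nrow = 3)
m <- t(sd_fix * (L %*% z))            # 1e5 draws of the 3 sampling errors

max(abs(cov(m) - vcv_toy))            # close to 0
```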
Code
fit_ma1 <- readRDS(here("Rdata", "fit_ma1.rds"))

summary(fit_ma1)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: dARR ~ 1 + (1 | p | study_ID) + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + (1 | p | study_ID)
##    Data: dat (Number of observations: 1089) 
##   Draws: 2 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 6000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1089) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## ~study_ID (Number of levels: 150) 
##                                Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                      0.14      0.01     0.11     0.16 1.00
## sd(sigma_Intercept)                0.82      0.07     0.68     0.97 1.00
## cor(Intercept,sigma_Intercept)     0.38      0.12     0.12     0.59 1.00
##                                Bulk_ESS Tail_ESS
## sd(Intercept)                      1103     2257
## sd(sigma_Intercept)                1031     1766
## cor(Intercept,sigma_Intercept)      699     1347
## 
## Regression Coefficients:
##                 Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept           0.20      0.01     0.17     0.23 1.00     1078     1794
## sigma_Intercept    -2.08      0.09    -2.25    -1.91 1.00     1029     1954
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

Note that there is non-zero variance (SD) in the between-study effect on the scale part as well as the location part. Also, there is a positive correlation (\(\rho_u = 0.38\), 95% CI [0.12, 0.59]) between the location and scale between-study random effects.
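Because the scale part uses a log link, its parameters are easiest to read after exponentiating. Using the posterior means from the summary above:

```r
b_s_int <- -2.08  # sigma_Intercept (posterior mean, from the summary above)
sd_s    <-  0.82  # sd(sigma_Intercept): between-study SD on the log scale

exp(b_s_int)                    # within-study residual SD for a typical study (~0.12)
exp(b_s_int + c(-1, 1) * sd_s)  # range implied by +/- 1 between-study SD (~0.06 to ~0.28)
```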

3.3.1 Visualizing meta-analytic results

Code
# getting plots
plots_fit_ma1 <- list(
  visualize_fixed_effects(fit_ma1),
  visualize_random_effects(fit_ma1),
  visualize_correlations(fit_ma1)
)


plots_fit_ma1[[1]] / plots_fit_ma1[[2]] / plots_fit_ma1[[3]]

TODO - probably try to get - CV and I2????

3.4 Location-scale meta-regression with a categorical moderator (biological)

Code
#TODO - may need to rerun this model

# biological - meta-regression
form_mr1 <- bf(dARR
            ~ 1  + habitat +
              (1|study_ID) + # this is u_l
              (1|gr(es_ID, cov = vcv)), # this is m
            sigma ~ 1 + habitat
)


# create prior

prior_mr1 <- default_prior(form_mr1, 
                        data = dat, 
                        data2 = list(vcv = vcv),
                        family = gaussian()
)

# fixing the variance to 1 (meta-analysis)
prior_mr1$prior[5] = "constant(1)"
prior_mr1 

# fitting model
fit_mr1 <- brm(form_mr1, 
            data = dat, 
            data2 = list(vcv = vcv),
            chains = 2, 
            cores = 2, 
            iter = 3000, 
            warmup = 2000,
            prior = prior_mr1,
            control = list(adapt_delta = 0.95, max_treedepth = 15)
)

# save this as rds

saveRDS(fit_mr1, here("Rdata", "fit_mr1.rds"))
Code
fit_mr1 <- readRDS(here("Rdata", "fit_mr1.rds"))

summary(fit_mr1)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: dARR ~ 1 + habitat + (1 | study_ID) + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + habitat
##    Data: dat (Number of observations: 1089) 
##   Draws: 2 chains, each with iter = 3000; warmup = 2000; thin = 1;
##          total post-warmup draws = 2000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1089) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## ~study_ID (Number of levels: 150) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.13      0.01     0.11     0.16 1.00      543     1180
## 
## Regression Coefficients:
##                          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## Intercept                    0.22      0.02     0.19     0.25 1.00      774
## sigma_Intercept             -1.39      0.03    -1.44    -1.33 1.00     1376
## habitatterrestrial          -0.16      0.04    -0.23    -0.09 1.01      289
## sigma_habitatterrestrial    -1.18      0.08    -1.33    -1.02 1.00     1164
##                          Tail_ESS
## Intercept                    1290
## sigma_Intercept              1678
## habitatterrestrial            781
## sigma_habitatterrestrial     1582
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

3.4.1 Visualizing meta-regression results (biological)

Code
# getting plots
plots_fit_mr1 <- list(
  visualize_fixed_effects(fit_mr1),
  visualize_random_effects(fit_mr1)
)


plots_fit_mr1[[1]] / plots_fit_mr1[[2]]

3.4.2 Visualizing effect sizes with an orchard plot (biological)

Code
# using metafor with a heteroscedastic model
# see - Nakagawa S, Lagisz M, O'Dea RE, Pottier P, Rutkowska J, Senior AM, Yang Y, Noble DW. orchaRd 2.0: An R package for visualising meta-analyses with orchard plots.
mr1_mod <- rma.mv(yi = dARR, 
                  V =  vcv,
                  mod = ~ habitat,
                  random = list(~1 | study_ID,
                                ~habitat | es_ID),
                  struct = "DIAG",
                  data = dat, 
                  test = "t",
                  sparse = TRUE,
                  control=list(optimizer="optim", optmethod="Nelder-Mead")
)

orchard_plot(mr1_mod, mod = "habitat", group = "study_ID", xlab = "Effect size")

3.5 Location-scale meta-regression with a categorical moderator (methodological)

Code
#TODO - may need to rerun this model

form_mr2 <- bf(dARR
            ~ 1  + method +
              (1|study_ID) + # this is u
              (1|gr(es_ID, cov = vcv)), # this is m
            sigma ~ 1 + method
)


# create prior

prior_mr2 <- default_prior(form_mr2, 
                        data = dat, 
                        data2 = list(vcv = vcv),
                        family = gaussian()
)

# fixing the variance to 1 (meta-analysis)
prior_mr2$prior[5] = "constant(1)"
prior_mr2 
# fit model

fit_mr2 <- brm(form_mr2, 
            data = dat, 
            data2 = list(vcv = vcv),
            chains = 2, 
            cores = 2, 
            iter = 3000, 
            warmup = 2000,
            #backend = "cmdstanr",
            prior = prior_mr2,
            #threads = threading(9),
            control = list(adapt_delta = 0.99, max_treedepth = 15)
)

# save this as rds

saveRDS(fit_mr2, here("Rdata", "fit_mr2.rds"))
Code
fit_mr2 <- readRDS(here("Rdata", "fit_mr2.rds"))

summary(fit_mr2)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: dARR ~ 1 + method + (1 | study_ID) + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + method
##    Data: dat (Number of observations: 1089) 
##   Draws: 2 chains, each with iter = 3000; warmup = 2000; thin = 1;
##          total post-warmup draws = 2000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1089) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## ~study_ID (Number of levels: 150) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.12      0.01     0.10     0.15 1.00      718     1373
## 
## Regression Coefficients:
##                         Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## Intercept                   0.24      0.02     0.21     0.28 1.00     1563
## sigma_Intercept            -1.57      0.06    -1.68    -1.45 1.00     1806
## methodpersisitent          -0.07      0.02    -0.10    -0.03 1.00     3050
## sigma_methodpersisitent     0.21      0.07     0.07     0.34 1.00     1967
##                         Tail_ESS
## Intercept                   1667
## sigma_Intercept             1343
## methodpersisitent           1665
## sigma_methodpersisitent     1364
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

3.5.1 Visualizing meta-regression results (methodological)

Code
# getting plots
plots_fit_mr2 <- list(
  visualize_fixed_effects(fit_mr2),
  visualize_random_effects(fit_mr2)
)


plots_fit_mr2[[1]] / plots_fit_mr2[[2]]

3.5.2 Visualizing effect sizes with an orchard plot (methodological)

Code
# using metafor with a heteroscedastic model
# see - Nakagawa S, Lagisz M, O'Dea RE, Pottier P, Rutkowska J, Senior AM, Yang Y, Noble DW. orchaRd 2.0: An R package for visualising meta-analyses with orchard plots.
mr2_mod <- rma.mv(yi = dARR, 
                  V =  vcv,
                  mod = ~ method,
                  random = list(~1 | study_ID,
                                ~method | es_ID),
                  struct = "DIAG",
                  data = dat, 
                  test = "t",
                  sparse = TRUE,
                  control=list(optimizer="optim", optmethod="Nelder-Mead")
)

orchard_plot(mr2_mod, mod = "method", group = "study_ID", xlab = "Effect size")

3.6 Location-scale phylogenetic meta-regression (biological)

TODO - the tree can be obtained from the code ….

3.6.1 Visualizing phylogenetic meta-regression results

4 Example 2: Continuous Moderator (Elevation Dataset)

Data source (illustrative): Midolo, G. et al. (2019). Global patterns of intraspecific leaf trait responses to elevation. Global Change Biology, 25(7), 2485–2498.

4.1 Loading the data

This dataset (elevation.csv) contains the group means, SDs, and sample sizes from which we compute an effect size (the log response ratio, ROM, via escalc() below) and its sampling variance vi. A key moderator is elevation_log (the elevation, on the natural-log scale, at which the organisms live).

Code
dat <- read.csv(here("data", "elevation.csv"))

dat <- escalc(measure = "ROM", 
              m1i = treatment, 
              m2i = control, 
              sd1i = sd_treatment, 
              sd2i = sd_control, 
              n1i = n_treatment, 
              n2i = n_control, 
              data = dat)


# renaming
dat$study_ID <- as.factor(dat$Study_ID)
dat$es_ID <- as.factor(1:nrow(dat))

# selecting variables we need
dat <- dat %>% select(yi, vi, es_ID, study_ID, elevation_log)

#head(dat)
str(dat)
## Classes 'escalc' and 'data.frame':   1294 obs. of  5 variables:
##  $ yi           : num  -0.2071 -0.1881 0.0903 0.0768 -0.0899 ...
##   ..- attr(*, "ni")= int [1:1294] 12 12 12 12 20 20 20 20 90 90 ...
##   ..- attr(*, "measure")= chr "ROM"
##  $ vi           : num  0.022603 0.013296 0.014744 0.016572 0.000585 ...
##  $ es_ID        : Factor w/ 1294 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ study_ID     : Factor w/ 71 levels "1","2","3","4",..: 3 3 3 3 32 32 32 32 36 36 ...
##  $ elevation_log: num  5.52 6.06 4.79 6.06 5.14 ...
##  - attr(*, "yi.names")= chr "yi"
##  - attr(*, "vi.names")= chr "vi"
##  - attr(*, "digits")= Named num [1:9] 4 4 4 4 4 4 4 4 4
##   ..- attr(*, "names")= chr [1:9] "est" "se" "test" "pval" ...

4.2 Creating a variance-covariance matrix for sampling errors

4.2.1 Assuming independence among effect sizes

If each effect size has an independent sampling error, we can build a diagonal matrix:

Code
vcv <- diag(dat$vi)
rownames(vcv) <- dat$es_ID
colnames(vcv) <- dat$es_ID

4.3 Multilevel location-scale meta-analysis

Code
#TODO - may need to rerun this model

form_ma2 <- bf(yi
             ~ 1   +
               (1|p|study_ID) + # this is u
               (1|gr(es_ID, cov = vcv)), # this is m
             sigma ~ 1 + (1|p|study_ID)
)



prior_ma2 <- default_prior(form_ma2, 
                         data = dat, 
                         data2 = list(vcv = vcv),
                         family = gaussian()
)

# fixing the variance to 1 (meta-analysis)
prior_ma2$prior[5] = "constant(1)"
prior_ma2 
# fit model

fit_ma2 <- brm(form_ma2, 
             data = dat, 
             data2 = list(vcv = vcv),
             chains = 2, 
             cores = 2, 
             iter = 6000, 
             warmup = 3000,
             prior = prior_ma2,
             control = list(adapt_delta = 0.95, max_treedepth = 15)
)

# save this as rds

saveRDS(fit_ma2, here("Rdata", "fit_ma2.rds"))
Code
fit_ma2 <- readRDS(here("Rdata", "fit_ma2.rds"))

summary(fit_ma2)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: yi ~ 1 + (1 | p | study_ID) + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + (1 | p | study_ID)
##    Data: dat (Number of observations: 1294) 
##   Draws: 2 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 6000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1294) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## ~study_ID (Number of levels: 71) 
##                                Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                      0.11      0.02     0.08     0.14 1.00
## sd(sigma_Intercept)                0.74      0.09     0.60     0.93 1.00
## cor(Intercept,sigma_Intercept)     0.39      0.15     0.07     0.65 1.00
##                                Bulk_ESS Tail_ESS
## sd(Intercept)                      1170     1874
## sd(sigma_Intercept)                1091     1415
## cor(Intercept,sigma_Intercept)      667     1379
## 
## Regression Coefficients:
##                 Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept           0.02      0.02    -0.01     0.06 1.00      746     1433
## sigma_Intercept    -1.84      0.10    -2.05    -1.64 1.01      960     1844
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

4.3.1 Visualizing meta-analytic results

Code
# getting plots
plots_fit_ma2 <- list(
  visualize_fixed_effects(fit_ma2),
  visualize_random_effects(fit_ma2),
  visualize_correlations(fit_ma2)
)


plots_fit_ma2[[1]] / plots_fit_ma2[[2]] / plots_fit_ma2[[3]]

TODO - probably try to get - CV and I2????

4.4 Location-scale meta-regression with a continuous moderator

Code
#TODO - may need to rerun this model

# yi = log response ratio (ROM), computed above
form_mr3 <- bf(yi
            ~ 1  + elevation_log +
              (1|study_ID) + # this is u
              (1|gr(es_ID, cov = vcv)), # this is m
            sigma ~ 1 + elevation_log
)


# create prior


prior_mr3 <- default_prior(form_mr3, 
                        data = dat, 
                        data2 = list(vcv = vcv),
                        family = gaussian()
)

# fixing the variance to 1 (meta-analysis)
prior_mr3$prior[5] = "constant(1)"
prior_mr3 

# fit model
fit_mr3 <- brm(form_mr3, 
            data = dat, 
            data2 = list(vcv = vcv),
            chains = 2, 
            cores = 2, 
            iter = 6000, 
            warmup = 3000,
            prior = prior_mr3,
            control = list(adapt_delta = 0.95, max_treedepth = 15)
)

# save this as rds

saveRDS(fit_mr3, here("Rdata", "fit_mr3.rds"))
Code
fit_mr3 <- readRDS(here("Rdata", "fit_mr3.rds"))

summary(fit_mr3)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: yi ~ 1 + elevation_log + (1 | study_ID) + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + elevation_log
##    Data: dat (Number of observations: 1294) 
##   Draws: 2 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 6000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1294) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## ~study_ID (Number of levels: 71) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.13      0.02     0.10     0.16 1.00     1104     2260
## 
## Regression Coefficients:
##                     Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept              -0.13      0.05    -0.23    -0.02 1.00     2213     3632
## sigma_Intercept        -3.73      0.18    -4.09    -3.39 1.00     3566     4319
## elevation_log           0.03      0.01     0.01     0.04 1.00     3332     4231
## sigma_elevation_log     0.36      0.03     0.30     0.42 1.00     3744     4498
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
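The scale slope is also multiplicative once back-transformed: each 1-unit increase in ln(elevation) multiplies the residual SD by exp(0.36) (posterior mean from the summary above).

```r
b_s_slope <- 0.36   # sigma_elevation_log from the summary above

exp(b_s_slope)      # ~1.43: SD multiplier per 1-unit increase in ln(elevation)
exp(b_s_slope * 2)  # ~2.05: e.g., going from ln(elevation) = 5 to 7
```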

4.4.1 Visualizing meta-regression results

Code
# getting plots
plots_fit_mr3 <- list(
  visualize_fixed_effects(fit_mr3),
  visualize_random_effects(fit_mr3)
)


plots_fit_mr3[[1]] / plots_fit_mr3[[2]]

4.4.2 Visualizing effect sizes with a bubble plot

Code
# using metafor and orchaRd
# see - Nakagawa S, Lagisz M, O'Dea RE, Pottier P, Rutkowska J, Senior AM, Yang Y, Noble DW. orchaRd 2.0: An R package for visualising meta-analyses with orchard plots.
mr3_mod <- rma.mv(yi = yi, 
                  V =  vcv,
                  mod = ~ elevation_log,
                  random = list(~1 | study_ID,
                                ~1 | es_ID),
                  #struct = "DIAG",
                  data = dat, 
                  test = "t",
                  sparse = TRUE,
                  control=list(optimizer="optim", optmethod="Nelder-Mead")
)

bubble_plot(mr3_mod, mod = "elevation_log", group = "study_ID", g = TRUE, 
            xlab = "ln(elevation) (moderator)",
            ylab = "lnRR (effect size)",
            legend.pos = "bottom.left")

One can clearly see an increase in variation as elevation_log increases.

5 Example 3: Publication Bias (Small-Study, Decline, Proteus)

Data source (illustrative): Neuschulz, E. L. et al. (2016). Pollination and seed dispersal are the most threatened processes of plant regeneration. Scientific Reports, 6, 29839.

We illustrate how to incorporate se (the sampling error standard deviation) and cyear (centred publication year) in both the location and scale parts:

6 Loading the data

This dataset (seed.csv) has an effect size (SMD) and its sampling variance. Two key moderators are: (1) se (the sampling error standard deviation, used to model the small-study effect and divergence) and (2) cyear (centred publication year, used to model the decline effect and the Proteus effect).
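Before fitting anything, a base-R toy simulation (all numbers arbitrary) shows the small-study effect that the se moderator is meant to capture: if studies are more likely to be reported when "significant", observed effect sizes grow with their standard errors, and regressing effect size on se recovers a positive slope.

```r
set.seed(7)
n  <- 5000
se <- runif(n, 0.05, 0.6)            # per-study standard errors (arbitrary)
yi <- rnorm(n, mean = 0.1, sd = se)  # true mean effect is 0.1 throughout

published <- yi / se > 1.96          # crude selection on "significance"
fit <- lm(yi[published] ~ se[published])

coef(fit)[2] > 0                     # positive slope = small-study effect
```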

Code
dat <- read.csv(here("data", "seed.csv"))

dat$study_ID <- as.factor(dat$study)
dat$es_ID <- as.factor(1:nrow(dat))
dat$se <- sqrt(dat$var.eff.size) # SE (sampling error SD)
dat$smd <- dat$eff.size # effect size
dat$cyear <- scale(dat$study.year, center = TRUE, scale = FALSE) # centred year


# selecting variables we need
dat <- dat %>% select(smd, var.eff.size, es_ID, study_ID, se, cyear)

#head(dat)
str(dat)
## 'data.frame':    98 obs. of  6 variables:
##  $ smd         : num  -0.65 2.24 -1.39 -1.08 0.42 0.34 -0.05 -5.53 -1.07 -0.46 ...
##  $ var.eff.size: num  0.6 0.23 0.17 0.16 0.14 0.14 0.13 0.52 0.49 0.42 ...
##  $ es_ID       : Factor w/ 98 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ study_ID    : Factor w/ 40 levels "Albrecht et al. 2012",..: 34 20 20 20 20 20 20 33 40 40 ...
##  $ se          : num  0.775 0.48 0.412 0.4 0.374 ...
##  $ cyear       : num [1:98, 1] -13.59 -7.59 -7.59 -7.59 -7.59 ...
##   ..- attr(*, "scaled:center")= num 2008
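A side note on the str() output above: cyear appears as a one-column matrix ("num [1:98, 1]") because scale() returns a matrix carrying a "scaled:center" attribute. This is harmless for the models below, but it can be flattened to a plain vector if preferred. A minimal base-R sketch, using made-up years for illustration:

```r
# scale() returns a one-column matrix with a "scaled:center" attribute;
# as.numeric() drops the matrix structure if a plain vector is preferred
year <- c(2005, 2008, 2010)  # hypothetical publication years
cyear_vec <- as.numeric(scale(year, center = TRUE, scale = FALSE))

is.matrix(scale(year, center = TRUE, scale = FALSE))  # TRUE
is.matrix(cyear_vec)                                  # FALSE
```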

6.1 Creating a variance-covariance matrix for sampling errors

6.1.1 Assuming independence among effect sizes

If each effect size has an independent sampling error, we can build a diagonal matrix:

Code
vcv <- diag(dat$var.eff.size)
rownames(vcv) <- dat$es_ID
colnames(vcv) <- dat$es_ID
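Since the sampling errors are assumed independent, the diagonal entries of this matrix are simply the squared standard errors; this is why the gr(es_ID, cov = vcv) approach used below and supplying se directly (as in dat$se) carry the same information. A quick base-R check of that identity, with hypothetical variances:

```r
# With independent errors, the sampling VCV is diagonal and its
# entries equal the squared standard errors
v <- c(0.60, 0.23, 0.17)  # hypothetical sampling variances
V_demo <- diag(v)         # diagonal sampling (co)variance matrix
se_demo <- sqrt(v)        # corresponding standard errors

all.equal(diag(V_demo), se_demo^2)  # TRUE
```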

6.2 Multilevel location-scale meta-analysis with correlation

Code
#TODO - may need to rerun this model

form_ma3 <- bf(smd
            ~ 1   +
              (1|p|study_ID) + # this is u_l
              (1|gr(es_ID, cov = vcv)), # this is m
            sigma ~ 1 + (1|p|study_ID) # |p| correlates the location and scale study effects
)



prior_ma3 <- default_prior(form_ma3, 
                        data = dat, 
                        data2 = list(vcv = vcv),
                        family = gaussian()
)


# fixing the variance to 1 (meta-analysis)
prior_ma3$prior[3] = "constant(1)"
prior_ma3

# fit model
fit_ma3 <- brm(form_ma3, 
            data = dat, 
            data2 = list(vcv = vcv),
            chains = 2, 
            cores = 2, 
            iter = 35000, 
            warmup = 5000,
            prior = prior_ma3,
            control = list(adapt_delta = 0.99, max_treedepth = 15)
)

summary(fit_ma3)

# save this as rds

saveRDS(fit_ma3, here("Rdata", "fit_ma3.rds"))
Code
fit_ma3 <- readRDS(here("Rdata", "fit_ma3.rds"))

summary(fit_ma3)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: smd ~ 1 + (1 | p | study_ID) + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + (1 | p | study_ID)
##    Data: dat (Number of observations: 98) 
##   Draws: 2 chains, each with iter = 35000; warmup = 5000; thin = 1;
##          total post-warmup draws = 60000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 98) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     1.63      0.23     1.10     1.85 2.09        4       54
## 
## ~study_ID (Number of levels: 40) 
##                                Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                      0.58      0.15     0.36     0.96 1.69
## sd(sigma_Intercept)                1.97      0.82     0.18     2.78 1.91
## cor(Intercept,sigma_Intercept)     0.46      0.61    -0.87     0.98 2.01
##                                Bulk_ESS Tail_ESS
## sd(Intercept)                         8       39
## sd(sigma_Intercept)                   3       25
## cor(Intercept,sigma_Intercept)        3       15
## 
## Regression Coefficients:
##                 Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept          -0.35      0.13    -0.70    -0.15 1.74       15       37
## sigma_Intercept    -4.52      2.21    -6.78    -0.92 2.03        3       36
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

The parameters are not mixing well and we have convergence issues, which are also clear from the figure below.

6.2.1 Visualizing meta-analytic results (with correlation)

Code
# getting plots
plots_fit_ma3 <- list(
  visualize_fixed_effects(fit_ma3),
  visualize_random_effects(fit_ma3),
  visualize_correlations(fit_ma3)
)


plots_fit_ma3[[1]] / plots_fit_ma3[[2]] / plots_fit_ma3[[3]]

6.3 Multilevel location-scale meta-analysis without the between-study correlation

Code
#TODO - may need to rerun this model

# no correlation 
# this runs but the one with correlation does not mix well

form_ma4 <- bf(smd
            ~ 1   +
              (1|study_ID) + # this is u
              (1|gr(es_ID, cov = vcv)), # this is m
            sigma ~ 1 + (1|study_ID)
)



prior_ma4 <- default_prior(form_ma4, 
                        data = dat, 
                        data2 = list(vcv = vcv),
                        family = gaussian()
)


# fixing the variance to 1 (meta-analysis)
prior_ma4$prior[3] = "constant(1)"
prior_ma4 
# fit model

fit_ma4 <- brm(form_ma4, 
            data = dat, 
            data2 = list(vcv = vcv),
            chains = 2, 
            cores = 2, 
            iter = 30000, 
            warmup = 5000,
            #backend = "cmdstanr",
            prior = prior_ma4,
            #threads = threading(9),
            control = list(adapt_delta = 0.99, max_treedepth = 15)
)

summary(fit_ma4)

# save this as rds

saveRDS(fit_ma4, here("Rdata", "fit_ma4.rds"))
Code
fit_ma4 <- readRDS(here("Rdata", "fit_ma4.rds"))

summary(fit_ma4)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: smd ~ 1 + (1 | study_ID) + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + (1 | study_ID)
##    Data: dat (Number of observations: 98) 
##   Draws: 2 chains, each with iter = 30000; warmup = 5000; thin = 1;
##          total post-warmup draws = 50000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 98) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## ~study_ID (Number of levels: 40) 
##                     Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)           0.66      0.15     0.41     0.98 1.00     2164      816
## sd(sigma_Intercept)     1.53      0.51     0.72     2.72 1.02      387      277
## 
## Regression Coefficients:
##                 Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept          -0.46      0.15    -0.77    -0.17 1.00     2895     5918
## sigma_Intercept    -1.40      0.66    -2.97    -0.40 1.02      374      189
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

Without the between-study correlation, the parameters mix reasonably well (all Rhat ≤ 1.02), as the figure below also shows.

6.3.1 Visualizing meta-analytic results (without correlation)

Code
# getting plots
plots_fit_ma4 <- list(
  visualize_fixed_effects(fit_ma4),
  visualize_random_effects(fit_ma4) # no correlation to visualize in this model
)


plots_fit_ma4[[1]] / plots_fit_ma4[[2]] 


6.4 Location-scale meta-regression to model publication bias

Code
#TODO - may need to rerun this model

form_mr4 <- bf(smd
             ~ 1 + cyear + se +
               (1|study_ID) + # this is u
               (1|gr(es_ID, cov = vcv)), # this is m
             sigma ~ 1 + cyear + se
)



prior_mr4 <- default_prior(form_mr4, 
                         data = dat, 
                         data2 = list(vcv = vcv),
                         family = gaussian()
)

# fixing the variance to 1 (meta-analysis)
prior_mr4$prior[5] = "constant(1)"
prior_mr4 
# fit model

fit_mr4 <- brm(form_mr4, 
             data = dat, 
             data2 = list(vcv = vcv),
             chains = 2, 
             cores = 2, 
             iter =35000, 
             warmup = 5000,
             #backend = "cmdstanr",
             prior = prior_mr4,
             #threads = threading(9),
             control = list(adapt_delta = 0.99, max_treedepth = 15)
)

summary(fit_mr4)

# save this as rds

saveRDS(fit_mr4, here("Rdata", "fit_mr4.rds"))
Code
fit_mr4 <- readRDS(here("Rdata", "fit_mr4.rds"))

summary(fit_mr4)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: smd ~ 1 + cyear + se + (1 | study_ID) + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + cyear + se
##    Data: dat (Number of observations: 98) 
##   Draws: 2 chains, each with iter = 35000; warmup = 5000; thin = 1;
##          total post-warmup draws = 60000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 98) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## ~study_ID (Number of levels: 40) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.66      0.13     0.44     0.95 1.00    15622    27363
## 
## Regression Coefficients:
##                 Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept           0.10      0.33    -0.53     0.75 1.00    19025    27181
## sigma_Intercept    -2.30      0.61    -3.59    -1.19 1.00     1933     2693
## cyear              -0.04      0.04    -0.12     0.04 1.00    28664    36930
## se                 -0.89      0.58    -2.06     0.23 1.00    15695    22795
## sigma_cyear        -0.19      0.06    -0.31    -0.09 1.00     1317     1709
## sigma_se            2.13      0.67     0.74     3.41 1.00     4982     6298
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
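Because sigma is modelled on the log scale, the scale coefficients act multiplicatively: the model-implied residual SD is the exponentiated scale linear predictor. A minimal sketch plugging in the posterior means printed above (values copied from the summary, so purely illustrative):

```r
# Model-implied residual SD from the scale part (log link),
# using the posterior means from the summary above
sigma_hat <- function(cyear, se) {
  exp(-2.30 - 0.19 * cyear + 2.13 * se)
}

sigma_hat(cyear = -10, se = 0.3)  # early study
sigma_hat(cyear =  10, se = 0.3)  # late study: much smaller residual SD
sigma_hat(cyear =   0, se = 0.8)  # high-SE study: larger residual SD
```

The positive sigma_se estimate reflects small-study divergence (more heterogeneity among imprecise studies), while the negative sigma_cyear reflects heterogeneity shrinking over time.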

6.4.1 Visualizing meta-regression results

Code
# getting plots
plots_fit_mr4 <- list(
  visualize_fixed_effects(fit_mr4),
  visualize_random_effects(fit_mr4)
)


plots_fit_mr4[[1]] / plots_fit_mr4[[2]]

6.4.2 Visualizing effect sizes with a bubble plot

Code
# using metafor and orchaRd
# see - Nakagawa S, Lagisz M, O'Dea RE, Pottier P, Rutkowska J, Senior AM, Yang Y, Noble DW. orchaRd 2.0: An R package for visualising meta-analyses with orchard plots.
mr4_mod <- rma.mv(yi = smd, 
                  V =  vcv,
                  mod = ~ se + cyear,
                  random = list(~1 | study_ID,
                                ~1 | es_ID),
                  #struct = "DIAG",
                  data = dat, 
                  test = "t",
                  sparse = TRUE,
                  control=list(optimizer="optim", optmethod="Nelder-Mead")
)

bubble_plot(mr4_mod, mod = "se", group = "study_ID", g = TRUE, 
            xlab = "Standard error (SE: moderator)",
            ylab = "SMD (effect size)",
            legend.pos = "bottom.left")

Code

bubble_plot(mr4_mod, mod = "cyear", group = "study_ID", g = TRUE, 
            xlab = "centred publication year (moderator)",
            ylab = "SMD (effect size)",
            legend.pos = "bottom.left")

One can see that the residual variation increases with SE (small-study divergence) and decreases with publication year (consistent with a Proteus effect), while the location effects of both moderators are weak.

7 Comparing brms results with other packages (metafor and blsmeta)

Here we compare the results of three packages. Note the following:

  • metafor can only fit the random-effects (single-level) location-scale meta-regression.
  • blsmeta can fit multilevel models, but only up to three levels, and it cannot place random effects on the scale part. Instead, blsmeta allows a regression formula for each variance component (apart from the sampling error variance); this is a different model and conceptualization from our proposed model. For details, see the following two papers:

Rodriguez JE, Williams DR, Bürkner PC. Heterogeneous heterogeneity by default: Testing categorical moderators in mixed‐effects meta‐analysis. British Journal of Mathematical and Statistical Psychology. 2023 May;76(2):402-33.

Williams DR, Rodriguez JE, Bürkner PC. Putting variation into variance: modeling between-study heterogeneity in meta-analysis. PsyArXiv 2024 https://osf.io/preprints/psyarxiv/9vkqy

This section has three parts: 1) comparing results of the random-effects meta-analyses (not location-scale models) using metafor and brms (we show two ways in metafor and three ways in brms), 2) comparing results of random-effects location-scale meta-regression models using metafor and brms, and 3) comparing multilevel (3-level) location-scale meta-regressions using blsmeta and brms.

7.1 The random-effects meta-analysis with metafor and brms

7.1.1 Loading the data

Here and below, we use the dataset from Example 1.

Code
dat <- read.csv(here("data", "thermal.csv"))

# selecting variables we need
dat <- dat %>% select(dARR, Var_dARR, es_ID, population_ID, study_ID, exp_design, habitat)
dat$study_ID <- as.numeric(dat$study_ID)
# creating SE (sqrt(Var))
dat$si <- sqrt(dat$Var_dARR)

# creating a new variable (method) according to the original paper
# (%in% tests membership; == would silently recycle the right-hand side)
dat$method <- ifelse(dat$exp_design %in% c("A", "B", "C"), "initial", "persisitent")

#head(dat)
str(dat)
## 'data.frame':    1089 obs. of  9 variables:
##  $ dARR         : num  0.0528 0.0428 0.054 0.0115 0.0503 ...
##  $ Var_dARR     : num  0.000171 0.000192 0.000239 0.000131 0.000175 ...
##  $ es_ID        : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ population_ID: int  1 1 2 2 2 3 3 3 3 3 ...
##  $ study_ID     : num  1 1 1 1 1 2 2 2 2 2 ...
##  $ exp_design   : chr  "D" "D" "D" "D" ...
##  $ habitat      : chr  "terrestrial" "terrestrial" "terrestrial" "terrestrial" ...
##  $ si           : num  0.0131 0.0139 0.0154 0.0114 0.0132 ...
##  $ method       : chr  "persisitent" "persisitent" "persisitent" "persisitent" ...

7.1.2 Creating a variance-covariance matrix for sampling errors

Code
# assuming independence of each effect size, as above
vcv <- diag(dat$Var_dARR)
rownames(vcv) <- dat$es_ID
colnames(vcv) <- dat$es_ID

The following five models should, and do, produce essentially the same results (apart from MCMC error).

7.1.3 Using metafor 1

Code
# random effects - default
fit_ma5 <- rma(yi = dARR, 
           vi = Var_dARR, 
           test="t",
           data = dat)

summary(fit_ma5)
## 
## Random-Effects Model (k = 1089; tau^2 estimator: REML)
## 
##    logLik   deviance        AIC        BIC       AICc   
## -286.0953   572.1907   576.1907   586.1749   576.2018   
## 
## tau^2 (estimated amount of total heterogeneity): 0.0743 (SE = 0.0036)
## tau (square root of estimated tau^2 value):      0.2726
## I^2 (total heterogeneity / total variability):   99.34%
## H^2 (total variability / sampling variability):  151.25
## 
## Test for Heterogeneity:
## Q(df = 1088) = 53884.2944, p-val < .0001
## 
## Model Results:
## 
## estimate      se     tval    df    pval   ci.lb   ci.ub      
##   0.1708  0.0089  19.2520  1088  <.0001  0.1534  0.1882  *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
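As a quick sanity check on the output above, I^2 relates tau^2 to a "typical" sampling variance v via I^2 = tau^2 / (tau^2 + v); rearranging recovers the typical sampling variance implied by the printed estimates (approximate, since metafor computes v from its internal weights):

```r
# Back out the "typical" sampling variance implied by tau^2 and I^2
tau2 <- 0.0743        # from the summary above
I2   <- 0.9934        # 99.34% as a proportion
v_typical <- tau2 * (1 - I2) / I2
v_typical             # roughly 5e-4: very precise effect sizes
```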

7.1.4 Using metafor 2

Code
# random effects - default
fit_ma6 <- rma.mv(yi = dARR, 
              V = vcv,
              random = ~ 1 | es_ID,
              test="t",
              data = dat)

summary(fit_ma6)
## 
## Multivariate Meta-Analysis Model (k = 1089; method: REML)
## 
##    logLik   Deviance        AIC        BIC       AICc   
## -286.0953   572.1907   576.1907   586.1749   576.2018   
## 
## Variance Components:
## 
##             estim    sqrt  nlvls  fixed  factor 
## sigma^2    0.0743  0.2726   1089     no   es_ID 
## 
## Test for Heterogeneity:
## Q(df = 1088) = 53884.2944, p-val < .0001
## 
## Model Results:
## 
## estimate      se     tval    df    pval   ci.lb   ci.ub      
##   0.1708  0.0089  19.2519  1088  <.0001  0.1534  0.1882  *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

7.1.5 Using brms 1

Code
form_ma7 <- bf(dARR | se(si) ~ 1 +  (1|es_ID) # this is e
)

prior_ma7 <- default_prior(form_ma7, 
                        data = dat, 
                        #data2 = list(vcv = vcv),
                        family = gaussian()
)

fit_ma7 <- brm(form_ma7, 
            data = dat, 
            chains = 2, 
            cores = 2, 
            iter = 35000, 
            warmup = 5000,
            prior = prior_ma7,
            control = list(adapt_delta = 0.99, max_treedepth = 20)
)


# save as rds
saveRDS(fit_ma7, here("Rdata", "fit_ma7.rds"))
Code
# load rds
fit_ma7 <- readRDS(here("Rdata", "fit_ma7.rds"))
print(summary(fit_ma7), digits = 4)
##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: dARR | se(si) ~ 1 + (1 | es_ID) 
##    Data: dat (Number of observations: 1089) 
##   Draws: 2 chains, each with iter = 35000; warmup = 5000; thin = 1;
##          total post-warmup draws = 60000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1089) 
##               Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS Tail_ESS
## sd(Intercept)   0.2727    0.0071   0.2590   0.2869 1.0007     2735     5819
## 
## Regression Coefficients:
##           Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS Tail_ESS
## Intercept   0.1712    0.0089   0.1533   0.1881 1.0015      664     1114
## 
## Further Distributional Parameters:
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma   0.0000    0.0000   0.0000   0.0000   NA       NA       NA
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

7.1.6 Using brms 2

Code
# brms 2
#------------
form_ma8 <- bf(dARR  ~ 1 +  (1|es_ID) # this is e
            + fcor(vcv)
)

prior_ma8 <- default_prior(form_ma8, 
                        data = dat, 
                        data2 = list(vcv = vcv),
                        family = gaussian()
)


# we need to fix the variance to 1 = fcor(vcv)
prior_ma8$prior[5] = "constant(1)"
prior_ma8 

fit_ma8 <- brm(form_ma8, 
           data = dat, 
           data2 = list(vcv = vcv),
           chains = 2, 
           cores = 2, 
           iter = 15000, 
           warmup = 5000,
           prior = prior_ma8,
           control = list(adapt_delta = 0.99, max_treedepth = 20)
)

# save as rds
saveRDS(fit_ma8, here("Rdata", "fit_ma8.rds"))
Code
# load rds
fit_ma8 <- readRDS(here("Rdata", "fit_ma8.rds"))
print(summary(fit_ma8), digit = 4)
##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: dARR ~ 1 + (1 | es_ID) + fcor(vcv) 
##    Data: dat (Number of observations: 1089) 
##   Draws: 2 chains, each with iter = 15000; warmup = 5000; thin = 1;
##          total post-warmup draws = 20000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1089) 
##               Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS Tail_ESS
## sd(Intercept)   0.2733    0.0069   0.2602   0.2871 1.0008     1382     2664
## 
## Regression Coefficients:
##           Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS Tail_ESS
## Intercept   0.1699    0.0088   0.1526   0.1872 1.0097      302      878
## 
## Further Distributional Parameters:
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma   1.0000    0.0000   1.0000   1.0000   NA       NA       NA
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

7.1.7 Using brms 3

Code
# brms 3 (using the trick explained above)
#------------
form_ma9 <- bf(dARR  ~ 1 +   
              (1|gr(es_ID, cov = vcv)), # this is m
            # residual will be es_ID SD
)

prior_ma9 <- default_prior(form_ma9, 
                        data = dat, 
                        data2 = list(vcv = vcv),
                        family = gaussian()
)

# fixing the variance to 1 (meta-analysis)
prior_ma9$prior[3] = "constant(1)"
prior_ma9 

fit_ma9 <- brm(form_ma9, 
           data = dat, 
           data2 = list(vcv = vcv),
           chains = 2, 
           cores = 2, 
           iter = 3000, 
           warmup = 2000,
           prior = prior_ma9,
           #control = list(adapt_delta = 0.99, max_treedepth = 20)
)

saveRDS(fit_ma9, here("Rdata", "fit_ma9.rds"))

Note that this model converged very quickly, needing far fewer iterations, especially compared to the first and second brms models.

Code
# load rds
fit_ma9 <- readRDS(here("Rdata", "fit_ma9.rds"))
print(summary(fit_ma9), digit = 4)
##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: dARR ~ 1 + (1 | gr(es_ID, cov = vcv)) 
##    Data: dat (Number of observations: 1089) 
##   Draws: 2 chains, each with iter = 3000; warmup = 2000; thin = 1;
##          total post-warmup draws = 2000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1089) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)   1.0000    0.0000   1.0000   1.0000   NA       NA       NA
## 
## Regression Coefficients:
##           Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS Tail_ESS
## Intercept   0.1708    0.0089   0.1539   0.1879 1.0043     3626     1666
## 
## Further Distributional Parameters:
##       Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS Tail_ESS
## sigma   0.2728    0.0067   0.2599   0.2866 1.0049     1891     1439
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

7.2 The random-effects location-scale meta-regression with metafor and brms

Note that while the location estimates from the two packages agree closely, the scale parts are parameterised differently: metafor models log(tau^2) whereas brms models log(sigma).

7.2.1 Using metafor (habitat)

Code
# metafor
#------------
fit_mr5 <- rma(yi = dARR, 
            vi = Var_dARR, 
            mods = ~ habitat,
            scale = ~ habitat,
            test="t",
            data = dat)

summary(fit_mr5)
## 
## Location-Scale Model (k = 1089; tau^2 estimator: REML)
## 
##    logLik   deviance        AIC        BIC       AICc   
## -196.8163   393.6327   401.6327   421.5974   401.6696   
## 
## Test for Residual Heterogeneity:
## QE(df = 1087) = 45754.7556, p-val < .0001
## 
## Test of Location Coefficients (coefficient 2):
## F(df1 = 1, df2 = 1087) = 92.5536, p-val < .0001
## 
## Model Results (Location):
## 
##                     estimate      se     tval    df    pval    ci.lb    ci.ub 
## intrcpt               0.1904  0.0103  18.4787  1087  <.0001   0.1702   0.2106 
## habitatterrestrial   -0.1277  0.0133  -9.6205  1087  <.0001  -0.1538  -0.1017 
##                         
## intrcpt             *** 
## habitatterrestrial  *** 
## 
## Test of Scale Coefficients (coefficient 2):
## F(df1 = 1, df2 = 1087) = 202.6108, p-val < .0001
## 
## Model Results (Scale):
## 
##                     estimate      se      tval    df    pval    ci.lb    ci.ub 
## intrcpt              -2.4622  0.0553  -44.5370  1087  <.0001  -2.5707  -2.3538 
## habitatterrestrial   -2.2102  0.1553  -14.2341  1087  <.0001  -2.5149  -1.9056 
##                         
## intrcpt             *** 
## habitatterrestrial  *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

7.2.2 Using brms (habitat)

Code
form_mr6 <- bf(dARR
            ~ 1   + habitat + 
              (1|gr(es_ID, cov = vcv)), # this is m
            sigma ~ 1 + habitat
)


prior_mr6 <- default_prior(form_mr6, 
                        data = dat, 
                        data2 = list(vcv = vcv),
                        family = gaussian()
)


# fixing the variance to 1 (meta-analysis)
prior_mr6$prior[5] = "constant(1)"
prior_mr6 
# fit model

fit_mr6 <- brm(form_mr6, 
            data = dat, 
            data2 = list(vcv = vcv),
            chains = 2, 
            cores = 2, 
            iter = 35000, 
            warmup = 5000,
            prior = prior_mr6,
            control = list(adapt_delta = 0.95, max_treedepth = 15)
)

# save as rds

saveRDS(fit_mr6, here("Rdata", "fit_mr6.rds"))
Code
# load the rds

fit_mr6 <- readRDS(here("Rdata", "fit_mr6.rds"))
print(summary(fit_mr6), digit = 4)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: dARR ~ 1 + habitat + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + habitat
##    Data: dat (Number of observations: 1089) 
##   Draws: 2 chains, each with iter = 35000; warmup = 5000; thin = 1;
##          total post-warmup draws = 60000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1089) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)   1.0000    0.0000   1.0000   1.0000   NA       NA       NA
## 
## Regression Coefficients:
##                          Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS
## Intercept                  0.1904    0.0104   0.1702   0.2108 1.0000    71642
## sigma_Intercept           -1.2305    0.0276  -1.2843  -1.1760 1.0000    78554
## habitatterrestrial        -0.1278    0.0134  -0.1538  -0.1016 1.0000    70806
## sigma_habitatterrestrial  -1.1034    0.0778  -1.2545  -0.9481 1.0000    62436
##                          Tail_ESS
## Intercept                   49143
## sigma_Intercept             50998
## habitatterrestrial          50385
## sigma_habitatterrestrial    49877
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
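The location estimates from metafor and brms agree to three decimals, but the scale coefficients look different at first glance. This is a parameterisation difference rather than a disagreement: metafor's scale part is on the log(tau^2) scale while brms models log(sigma), and log(tau^2) = 2 log(tau). Halving metafor's scale coefficients recovers the brms estimates (values copied from the two summaries above):

```r
# metafor scale part: log(tau^2); brms scale part: log(sigma)
# log(tau^2) = 2 * log(tau), so halve metafor's coefficients to compare
metafor_scale <- c(intercept = -2.4622, habitatterrestrial = -2.2102)
brms_scale    <- c(intercept = -1.2305, habitatterrestrial = -1.1034)

metafor_scale / 2  # matches brms up to REML/MCMC error
```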

7.2.3 Using metafor (method)

Code
fit_mr7 <- rma(yi = dARR, 
            vi = Var_dARR, 
            mods = ~ method,
            scale = ~ method,
            test="t",
            data = dat)

summary(fit_mr7)
## 
## Location-Scale Model (k = 1089; tau^2 estimator: REML)
## 
##    logLik   deviance        AIC        BIC       AICc   
## -269.8555   539.7109   547.7109   567.6756   547.7479   
## 
## Test for Residual Heterogeneity:
## QE(df = 1087) = 50559.9341, p-val < .0001
## 
## Test of Location Coefficients (coefficient 2):
## F(df1 = 1, df2 = 1087) = 29.5207, p-val < .0001
## 
## Model Results (Location):
## 
##                    estimate      se     tval    df    pval    ci.lb    ci.ub 
## intrcpt              0.2443  0.0151  16.1580  1087  <.0001   0.2146   0.2739 
## methodpersisitent   -0.1003  0.0185  -5.4333  1087  <.0001  -0.1365  -0.0641 
##                        
## intrcpt            *** 
## methodpersisitent  *** 
## 
## Test of Scale Coefficients (coefficient 2):
## F(df1 = 1, df2 = 1087) = 7.7152, p-val = 0.0056
## 
## Model Results (Scale):
## 
##                    estimate      se      tval    df    pval    ci.lb    ci.ub 
## intrcpt             -2.8803  0.1023  -28.1691  1087  <.0001  -3.0810  -2.6797 
## methodpersisitent    0.3289  0.1184    2.7776  1087  0.0056   0.0966   0.5613 
##                        
## intrcpt            *** 
## methodpersisitent   ** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

7.2.4 Using brms (method)

Code
form_mr8 <- bf(dARR
            ~ 1   + method + 
              (1|gr(es_ID, cov = vcv)), # this is m
            sigma ~ 1 + method
)


prior_mr8<- default_prior(form_mr8, 
                        data = dat, 
                        data2 = list(vcv = vcv),
                        family = gaussian()
)


# fixing the variance to 1 (meta-analysis)
prior_mr8$prior[5] = "constant(1)"
prior_mr8 
# fit model

fit_mr8 <- brm(form_mr8, 
            data = dat, 
            data2 = list(vcv = vcv),
            chains = 2, 
            cores = 2, 
            iter = 35000, 
            warmup = 5000,
            prior = prior_mr8,
            control = list(adapt_delta = 0.95, max_treedepth = 15)
)

# save as rds

saveRDS(fit_mr8, here("Rdata", "fit_mr8.rds"))
Code
# load the rds
fit_mr8 <- readRDS(here("Rdata", "fit_mr8.rds"))
print(summary(fit_mr8), digit = 4)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: dARR ~ 1 + method + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + method
##    Data: dat (Number of observations: 1089) 
##   Draws: 2 chains, each with iter = 35000; warmup = 5000; thin = 1;
##          total post-warmup draws = 60000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1089) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)   1.0000    0.0000   1.0000   1.0000   NA       NA       NA
## 
## Regression Coefficients:
##                         Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS
## Intercept                 0.2443    0.0153   0.2143   0.2740 1.0001   118228
## sigma_Intercept          -1.4388    0.0511  -1.5379  -1.3381 1.0000    75795
## methodpersisitent        -0.1003    0.0187  -0.1369  -0.0640 1.0002   112868
## sigma_methodpersisitent   0.1638    0.0593   0.0469   0.2798 1.0000    78709
##                         Tail_ESS
## Intercept                  49547
## sigma_Intercept            52375
## methodpersisitent          48197
## sigma_methodpersisitent    50396
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

7.3 Mulitlevel (3-level) location-scale meta-regression models with brms and blsmeta

TODO - add description

Both brms and blsmeta can fit three-level location-scale models, although they parameterise the scale part differently (blsmeta uses regression formulas on the variance components rather than random effects, as noted above).

7.3.1 Using brms (habitat)

Code
form_mr9 <- bf(dARR
            ~ 1  + habitat +
              (1|study_ID) + # this is u
              (1|gr(es_ID, cov = vcv)), # this is m
            sigma ~ 1 + habitat
)


# create prior

prior_mr9 <- default_prior(form_mr9, 
                        data = dat, 
                        data2 = list(vcv = vcv),
                        family = gaussian()
)

# fixing the variance to 1 (meta-analysis)
prior_mr9$prior[5] = "constant(1)"
prior_mr9 

# fit model
fit_mr9 <- brm(form_mr9, 
            data = dat, 
            data2 = list(vcv = vcv),
            chains = 2, 
            cores = 2, 
            iter = 3000, 
            warmup = 2000,
            prior = prior_mr9,
            control = list(adapt_delta = 0.95, max_treedepth = 15)
)

# save this as rds
saveRDS(fit_mr9, here("Rdata", "fit_mr9.rds"))
Code
# read in rds
fit_mr9 <- readRDS(here("Rdata", "fit_mr9.rds"))
print(summary(fit_mr9), digit = 4)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: dARR ~ 1 + habitat + (1 | study_ID) + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + habitat
##    Data: dat (Number of observations: 1089) 
##   Draws: 2 chains, each with iter = 3000; warmup = 2000; thin = 1;
##          total post-warmup draws = 2000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1089) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)   1.0000    0.0000   1.0000   1.0000   NA       NA       NA
## 
## ~study_ID (Number of levels: 150) 
##               Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS Tail_ESS
## sd(Intercept)   0.1308    0.0126   0.1087   0.1586 1.0014      593      923
## 
## Regression Coefficients:
##                          Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS
## Intercept                  0.2180    0.0163   0.1860   0.2499 1.0008      532
## sigma_Intercept           -1.3899    0.0305  -1.4489  -1.3282 1.0001     1181
## habitatterrestrial        -0.1570    0.0341  -0.2225  -0.0893 1.0066      350
## sigma_habitatterrestrial  -1.1776    0.0828  -1.3328  -1.0107 1.0020     1166
##                          Tail_ESS
## Intercept                     944
## sigma_Intercept              1249
## habitatterrestrial            707
## sigma_habitatterrestrial     1256
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

7.3.2 Using blsmeta (habitat)

Code
fit_mr10 <- blsmeta(yi = dARR,
               vi = Var_dARR,
               es_id = es_ID,
               study_id = study_ID,
               mods = ~ 1 + habitat,
               mods_scale2 = ~ 1 + habitat,
               data = dat)
#save
saveRDS(fit_mr10, here("Rdata", "fit_mr10.rds"))
Code
# read in rds
fit_mr10  <- readRDS(here("Rdata", "fit_mr10.rds"))
print(summary(fit_mr10), digit = 4)
##                   Length Class     Mode     
## posterior_samples    4   mcmc.list list     
## X_location        2178   -none-    numeric  
## X_location_old    2178   -none-    numeric  
## X_location_means     2   -none-    numeric  
## X_scale2          2178   -none-    numeric  
## X_scale2_old      2178   -none-    numeric  
## X_scale2_means       2   -none-    numeric  
## X_scale3           150   -none-    numeric  
## X_scale3_old       150   -none-    numeric  
## X_scale3_means       1   -none-    numeric  
## args                 8   -none-    call     
## chains               1   -none-    numeric  
## iter                 1   -none-    numeric  
## warmup               1   -none-    numeric  
## prior                1   -none-    character
## model                1   -none-    character
## mods_f               2   formula   call     
## mods_scale2_f        2   formula   call     
## mods_scale3_f        2   formula   call     
## dat_list             4   -none-    list     
## model_code           1   -none-    character
## k                    1   -none-    numeric  
## J                    1   -none-    numeric

7.3.3 Using brms (method)

Code
form_mr11  <- bf(dARR
             ~ 1  + method +
               (1|study_ID) + # this is u, the study-level effect
               (1|gr(es_ID, cov = vcv)), # this is m, the sampling error
             sigma ~ 1 + method
)


# create prior

prior_mr11 <- default_prior(form_mr11, 
                         data = dat, 
                         data2 = list(vcv = vcv),
                         family = gaussian()
)

# fixing the variance to 1 (meta-analysis)
prior_mr11$prior[5] = "constant(1)"
prior_mr11 
# fit model

fit_mr11 <- brm(form_mr11, 
             data = dat, 
             data2 = list(vcv = vcv),
             chains = 2, 
             cores = 2, 
             iter = 3000, 
             warmup = 2000,
             prior = prior_mr11,
             control = list(adapt_delta = 0.99, max_treedepth = 15)
)

summary(fit_mr11)

# save this as rds
saveRDS(fit_mr11, here("Rdata", "fit_mr11.rds"))
Code
# read in rds
fit_mr11 <- readRDS(here("Rdata", "fit_mr11.rds"))
print(summary(fit_mr11), digits = 4)
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: dARR ~ 1 + method + (1 | study_ID) + (1 | gr(es_ID, cov = vcv)) 
##          sigma ~ 1 + method
##    Data: dat (Number of observations: 1089) 
##   Draws: 2 chains, each with iter = 3000; warmup = 2000; thin = 1;
##          total post-warmup draws = 2000
## 
## Multilevel Hyperparameters:
## ~es_ID (Number of levels: 1089) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)   1.0000    0.0000   1.0000   1.0000   NA       NA       NA
## 
## ~study_ID (Number of levels: 150) 
##               Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS Tail_ESS
## sd(Intercept)   0.1346    0.0129   0.1105   0.1608 1.0021      666     1020
## 
## Regression Coefficients:
##                         Estimate Est.Error l-95% CI u-95% CI   Rhat Bulk_ESS
## Intercept                 0.2427    0.0187   0.2059   0.2788 1.0036     1106
## sigma_Intercept          -1.6581    0.0606  -1.7789  -1.5377 1.0010     1362
## methodpersisitent        -0.0637    0.0168  -0.0969  -0.0302 1.0008     2779
## sigma_methodpersisitent   0.2267    0.0725   0.0835   0.3693 1.0015     1323
##                         Tail_ESS
## Intercept                   1455
## sigma_Intercept             1671
## methodpersisitent           1525
## sigma_methodpersisitent     1604
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
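On the log link, a scale coefficient exponentiates to a multiplicative effect on the residual SD. A minimal sketch using the point estimate for sigma_methodpersisitent (the level name follows the spelling in the data; for exact inference, use the posterior draws):

```r
# Multiplicative change in residual SD for the second method level
ratio <- exp(0.2267)   # exp of the sigma contrast from the summary above
round(ratio, 2)        # about 1.25, i.e. ~25% larger residual SD
```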

7.3.4 Using blsmeta (method)

Code
fit_mr12 <- blsmeta(yi = dARR,
                 vi = Var_dARR,
                 es_id = es_ID,
                 study_id = study_ID,
                 mods = ~ 1 + method,
                 mods_scale2 = ~ 1 + method,
                 data = dat)
#save
saveRDS(fit_mr12, here("Rdata", "fit_mr12.rds"))
Code
# read in rds
fit_mr12 <- readRDS(here("Rdata", "fit_mr12.rds"))
print(summary(fit_mr12), digits = 4)
##                   Length Class     Mode     
## posterior_samples    4   mcmc.list list     
## X_location        2178   -none-    numeric  
## X_location_old    2178   -none-    numeric  
## X_location_means     2   -none-    numeric  
## X_scale2          2178   -none-    numeric  
## X_scale2_old      2178   -none-    numeric  
## X_scale2_means       2   -none-    numeric  
## X_scale3           150   -none-    numeric  
## X_scale3_old       150   -none-    numeric  
## X_scale3_means       1   -none-    numeric  
## args                 8   -none-    call     
## chains               1   -none-    numeric  
## iter                 1   -none-    numeric  
## warmup               1   -none-    numeric  
## prior                1   -none-    character
## model                1   -none-    character
## mods_f               2   formula   call     
## mods_scale2_f        2   formula   call     
## mods_scale3_f        2   formula   call     
## dat_list             4   -none-    list     
## model_code           1   -none-    character
## k                    1   -none-    numeric  
## J                    1   -none-    numeric
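As with the habitat fit, summary() on the blsmeta object falls back to the default list method here. Per the structure listing above, the draws live in the posterior_samples element (an mcmc.list), so one way to obtain conventional posterior summaries is via coda (a sketch, assuming the saved fit is available):

```r
library(coda)                          # methods for mcmc.list objects
fit_mr12 <- readRDS(here("Rdata", "fit_mr12.rds"))
post <- fit_mr12$posterior_samples     # mcmc.list, per the structure above
summary(post)                          # posterior means, SDs, and quantiles
```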
Note

xxxxx

It’s important…

Important

In …

8 Software and package versions

Code
sessionInfo() %>% pander()

R version 4.4.2 (2024-10-31)

Platform: aarch64-apple-darwin20

locale: en_US.UTF-8||en_US.UTF-8||en_US.UTF-8||C||en_US.UTF-8||en_US.UTF-8

attached base packages: stats, graphics, grDevices, utils, datasets, methods and base

other attached packages: blsmeta(v.0.1.0), metafor(v.4.6-0), numDeriv(v.2016.8-1.1), metadat(v.1.2-0), Matrix(v.1.7-0), brms(v.2.21.0), Rcpp(v.1.0.12), pander(v.0.6.5), orchaRd(v.2.0), patchwork(v.1.2.0), here(v.1.0.1), tidybayes(v.3.0.7), lubridate(v.1.9.3), forcats(v.1.0.0), stringr(v.1.5.1), dplyr(v.1.1.4), purrr(v.1.0.2), readr(v.2.1.5), tidyr(v.1.3.1), tibble(v.3.2.1), ggplot2(v.3.5.1), tidyverse(v.2.0.0) and pacman(v.0.5.1)

loaded via a namespace (and not attached): gridExtra(v.2.3), inline(v.0.3.19), sandwich(v.3.1-0), rlang(v.1.1.4), magrittr(v.2.0.3), multcomp(v.1.4-25), matrixStats(v.1.4.1), compiler(v.4.4.2), mgcv(v.1.9-1), loo(v.2.8.0), vctrs(v.0.6.5), reshape2(v.1.4.4), bayeslincom(v.1.3.0), pkgconfig(v.2.0.3), arrayhelpers(v.1.1-0), fastmap(v.1.2.0), backports(v.1.5.0), labeling(v.0.4.3), utf8(v.1.2.4), rmarkdown(v.2.27), tzdb(v.0.4.0), ggbeeswarm(v.0.7.2), xfun(v.0.45), jsonlite(v.1.8.8), parallel(v.4.4.2), R6(v.2.5.1), stringi(v.1.8.4), StanHeaders(v.2.32.10), rjags(v.4-16), estimability(v.1.5.1), rstan(v.2.32.6), knitr(v.1.48), zoo(v.1.8-12), bayesplot(v.1.11.1), splines(v.4.4.2), timechange(v.0.3.0), tidyselect(v.1.2.1), rstudioapi(v.0.16.0), abind(v.1.4-5), yaml(v.2.3.9), codetools(v.0.2-20), pkgbuild(v.1.4.4), lattice(v.0.22-6), plyr(v.1.8.9), withr(v.3.0.0), bridgesampling(v.1.1-2), posterior(v.1.6.0), coda(v.0.19-4.1), evaluate(v.0.24.0), survival(v.3.7-0), RcppParallel(v.5.1.9), ggdist(v.3.3.2), pillar(v.1.9.0), tensorA(v.0.36.2.1), checkmate(v.2.3.2), stats4(v.4.4.2), distributional(v.0.5.0), generics(v.0.1.3), rprojroot(v.2.0.4), mathjaxr(v.1.6-0), hms(v.1.1.3), rstantools(v.2.4.0), munsell(v.0.5.1), scales(v.1.3.0), xtable(v.1.8-4), glue(v.1.7.0), emmeans(v.1.10.4), tools(v.4.4.2), mvtnorm(v.1.2-5), grid(v.4.4.2), QuickJSR(v.1.3.1), colorspace(v.2.1-0), nlme(v.3.1-165), beeswarm(v.0.4.0), vipor(v.0.4.7), latex2exp(v.0.9.6), cli(v.3.6.3), fansi(v.1.0.6), svUnit(v.1.0.6), Brobdingnag(v.1.2-9), gtable(v.0.3.5), digest(v.0.6.36), TH.data(v.1.1-2), htmlwidgets(v.1.6.4), farver(v.2.1.2), htmltools(v.0.5.8.1), lifecycle(v.1.0.4) and MASS(v.7.3-61)